Gram Matrix

In linear algebra, the Gram matrix (or Gramian matrix, Gramian) of a set of vectors v_1,\dots, v_n in an inner product space is the Hermitian matrix of inner products, whose entries are given by the inner product G_{ij} = \left\langle v_i, v_j \right\rangle (p. 441, Theorem 7.2.10). If the vectors v_1,\dots, v_n are the columns of matrix X then the Gram matrix is X^\dagger X in the general case that the vector coordinates are complex numbers, which simplifies to X^\top X for the case that the vector coordinates are real numbers.
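
As a concrete illustration, here is a minimal numpy sketch (the matrix X below is an arbitrary example chosen for this page, not taken from any reference):

import numpy as np

# Three complex vectors as the columns of X
X = np.array([[1 + 1j, 0, 2],
              [0, 1j, 1],
              [1, 1, 0]])

G = X.conj().T @ X                          # Gram matrix X^dagger X
print(np.allclose(G, G.conj().T))           # True: G is Hermitian
print(G[0, 1], np.vdot(X[:, 0], X[:, 1]))   # entry G_01 equals <v_0, v_1>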

An important application is to compute linear independence: a set of vectors is linearly independent if and only if the Gram determinant (the determinant of the Gram matrix) is non-zero.
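
A quick numerical check of this criterion, using hypothetical example vectors:

import numpy as np

v = np.array([[1., 2., 3.],
              [4., 5., 6.]]).T                    # columns: two vectors in R^3
print(np.linalg.det(v.T @ v) != 0)                # True: nonzero Gramian, independent

w = np.column_stack([v[:, 0], 2 * v[:, 0]])       # second vector is a multiple of the first
print(np.isclose(np.linalg.det(w.T @ w), 0))      # True: dependent set, zero Gramian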

It is named after Jørgen Pedersen Gram.


Examples
For finite-dimensional real vectors in \mathbb{R}^n with the usual Euclidean dot product, the Gram matrix is G = V^\top V, where V is a matrix whose columns are the vectors v_k and V^\top is its transpose, whose rows are the vectors v_k^\top. For complex vectors in \mathbb{C}^n, G = V^\dagger V, where V^\dagger is the conjugate transpose of V.

Given square-integrable functions \{\ell_i(\cdot),\, i = 1,\dots,n\} on the interval \left[t_0, t_f\right], the Gram matrix G = \left[G_{ij}\right] is:

G_{ij} = \int_{t_0}^{t_f} \ell_i^*(\tau)\ell_j(\tau)\, d\tau.
where \ell_i^*(\tau) is the complex conjugate of \ell_i(\tau).
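
For instance, the Gram matrix of the real monomials 1, t, t^2 on [0, 1] can be approximated by numerical quadrature; this sketch uses scipy.integrate.quad and drops the conjugate because the functions are real:

import numpy as np
from scipy.integrate import quad

t0, tf = 0.0, 1.0
ells = [lambda t: 1.0, lambda t: t, lambda t: t**2]   # 1, t, t^2 on [0, 1]

n = len(ells)
G = np.empty((n, n))
for i in range(n):
    for j in range(n):
        G[i, j], _ = quad(lambda t: ells[i](t) * ells[j](t), t0, tf)

print(np.round(G, 6))   # the 3x3 Hilbert matrix: G_ij = 1/(i + j + 1)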

For any bilinear form B on a finite-dimensional vector space over any field we can define a Gram matrix G attached to a set of vectors v_1, \dots, v_n by G_{ij} = B\left(v_i, v_j\right). The matrix will be symmetric if the bilinear form B is symmetric.
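
A small sketch with a hypothetical symmetric bilinear form B(x, y) = x^T A y on \mathbb{R}^2:

import numpy as np

A = np.array([[2., 1.],
              [1., 3.]])                  # symmetric matrix defining B(x, y) = x^T A y
B = lambda x, y: x @ A @ y

vs = [np.array([1., 0.]), np.array([1., 1.])]
G = np.array([[B(x, y) for y in vs] for x in vs])
print(np.allclose(G, G.T))                # True: symmetric B gives a symmetric Gram matrix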


Applications
  • In Riemannian geometry, given an embedded k-dimensional Riemannian manifold M\subset \mathbb{R}^n and a parametrization \phi: U\to M for (x_1, \dots, x_k) \in U, the volume form \omega on M induced by the embedding may be computed using the Gramian of the coordinate tangent vectors: \omega = \sqrt{\det G}\ dx_1 \cdots dx_k,\quad G = \left[\left\langle \tfrac{\partial\phi}{\partial x_i}, \tfrac{\partial\phi}{\partial x_j} \right\rangle\right]. This generalizes the classical surface integral of a parametrized surface \phi:U\to S\subset \mathbb{R}^3 for (x, y)\in U\subset\mathbb{R}^2 (see the sketch after this list): \int_S f\ dA = \iint_U f(\phi(x, y))\, \left|\frac{\partial\phi}{\partial x}\,{\times}\,\frac{\partial\phi}{\partial y}\right|\, dx\, dy.
  • If the vectors are centered random variables, the Gramian is approximately proportional to the covariance matrix, with the scaling determined by the number of elements in the vector.
  • In quantum chemistry, the Gram matrix of a set of basis vectors is the overlap matrix.
  • In control theory (or more generally systems theory), the controllability Gramian and observability Gramian determine properties of a linear system.
  • Gramian matrices arise in covariance structure model fitting (see e.g., Jamshidian and Bentler, 1993, Applied Psychological Measurement, Volume 18, pp. 79–94).
  • In the finite element method, the Gram matrix arises from approximating a function from a finite dimensional space; the Gram matrix entries are then the inner products of the basis functions of the finite dimensional subspace.
  • In machine learning, kernel functions are often represented as Gram matrices (also see kernel PCA).
  • Since the Gram matrix over the reals is a symmetric matrix, it is diagonalizable and its eigenvalues are non-negative. The diagonalization of the Gram matrix is the singular value decomposition.
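
As promised in the Riemannian-geometry item above, here is a sketch computing a surface area via the Gramian of coordinate tangent vectors. The unit sphere is a hypothetical example chosen because its area 4\pi is known in closed form:

import numpy as np
from scipy.integrate import dblquad

# Unit sphere parametrized by phi(theta, psi) =
# (sin(theta)cos(psi), sin(theta)sin(psi), cos(theta))
def tangents(theta, psi):
    d_theta = np.array([np.cos(theta) * np.cos(psi),
                        np.cos(theta) * np.sin(psi),
                        -np.sin(theta)])
    d_psi = np.array([-np.sin(theta) * np.sin(psi),
                      np.sin(theta) * np.cos(psi),
                      0.0])
    return d_theta, d_psi

def sqrt_gram(psi, theta):    # dblquad passes the inner variable first
    t1, t2 = tangents(theta, psi)
    G = np.array([[t1 @ t1, t1 @ t2],
                  [t2 @ t1, t2 @ t2]])
    return np.sqrt(np.linalg.det(G))

area, _ = dblquad(sqrt_gram, 0, np.pi, 0, 2 * np.pi)
print(area, 4 * np.pi)        # both approximately 12.566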


Properties

Positive-semidefiniteness
The Gram matrix is symmetric in the case the inner product is real-valued; it is Hermitian in the general, complex case by definition of an inner product.

The Gram matrix is positive semidefinite, and every positive semidefinite matrix is the Gramian matrix for some set of vectors. The fact that the Gramian matrix is positive-semidefinite can be seen from the following simple derivation:

 x^\dagger \mathbf{G} x =
 \sum_{i,j} x_i^* x_j \left\langle v_i, v_j \right\rangle =
 \sum_{i,j} \left\langle x_i v_i, x_j v_j \right\rangle =
 \biggl\langle \sum_i x_i v_i, \sum_j x_j v_j \biggr\rangle =
 \biggl\| \sum_i x_i v_i \biggr\|^2 \geq 0 .

The first equality follows from the definition of matrix multiplication, the second and third from the sesquilinearity of the inner product, and the last from the positive definiteness of the inner product. Note that this also shows that the Gramian matrix is positive definite if and only if the vectors v_i are linearly independent (that is, \sum_i x_i v_i \neq 0 for all nonzero x).
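
The chain of equalities can be spot-checked numerically; in this sketch the vectors and coefficients are random, an arbitrary choice:

import numpy as np

rng = np.random.default_rng(0)
V = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))   # columns v_i
G = V.conj().T @ V

x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
lhs = (x.conj() @ G @ x).real               # x^dagger G x
rhs = np.linalg.norm(V @ x) ** 2            # || sum_i x_i v_i ||^2
print(np.isclose(lhs, rhs), lhs >= 0)       # True True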


Finding a vector realization
Given any positive semidefinite matrix M, one can decompose it as:
M = B^\dagger B,

where B^\dagger is the conjugate transpose of B (or M = B^\textsf{T} B in the real case).

Here B is a k \times n matrix, where k is the rank of M. Various ways to obtain such a decomposition include computing the Cholesky decomposition or taking the non-negative square root of M.

The columns b^{(1)}, \dots, b^{(n)} of B can be seen as n vectors in \mathbb{C}^k (or k-dimensional Euclidean space \mathbb{R}^k, in the real case). Then

M_{ij} = b^{(i)} \cdot b^{(j)}

where a \cdot b = \sum_{\ell=1}^k a_\ell^* b_\ell is the usual inner product on \mathbb{C}^k.

Thus a Hermitian matrix M is positive semidefinite if and only if it is the Gram matrix of some vectors b^{(1)}, \dots, b^{(n)}. Such vectors are called a vector realization of M. The infinite-dimensional analog of this statement is Mercer's theorem.
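
A sketch of finding a vector realization for a hypothetical positive semidefinite matrix via the non-negative square root; the Cholesky factor serves equally well when M is positive definite:

import numpy as np

# A positive definite matrix M (eigenvalues 2 - sqrt(2), 2, 2 + sqrt(2))
M = np.array([[2., 1., 0.],
              [1., 2., 1.],
              [0., 1., 2.]])

# Non-negative square root via the eigendecomposition M = Q diag(w) Q^T
w, Q = np.linalg.eigh(M)
B = Q @ np.diag(np.sqrt(np.clip(w, 0, None))) @ Q.T
print(np.allclose(B.T @ B, M))        # True: the columns of B realize M

L = np.linalg.cholesky(M)             # alternative factor: M = L L^T
print(np.allclose(L @ L.T, M))        # True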


Uniqueness of vector realizations
If M is the Gram matrix of vectors v_1,\dots,v_n in \mathbb{R}^k then applying any rotation or reflection of \mathbb{R}^k (any orthogonal transformation, that is, any Euclidean isometry preserving 0) to the sequence of vectors results in the same Gram matrix. That is, for any k \times k orthogonal matrix Q, the Gram matrix of Q v_1,\dots, Q v_n is also M.

This is the only way in which two real vector realizations of M can differ: the vectors v_1,\dots,v_n are unique up to orthogonal transformations. In other words, the dot products v_i \cdot v_j and w_i \cdot w_j are equal if and only if some rigid transformation of \mathbb{R}^k transforms the vectors v_1,\dots,v_n to w_1, \dots, w_n and 0 to 0.

The same holds in the complex case, with unitary transformations in place of orthogonal ones. That is, if the Gram matrix of vectors v_1, \dots, v_n is equal to the Gram matrix of vectors w_1, \dots, w_n in \mathbb{C}^k then there is a k \times k unitary matrix U (meaning U^\dagger U = I) such that v_i = U w_i for i = 1, \dots, n (p. 452, Theorem 7.3.11).
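
A quick numerical illustration of this invariance in the real case, with random vectors and a random orthogonal Q:

import numpy as np

rng = np.random.default_rng(1)
V = rng.standard_normal((3, 4))                     # four vectors in R^3 as columns

# A random orthogonal matrix from a QR factorization
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))

print(np.allclose((Q @ V).T @ (Q @ V), V.T @ V))    # True: Gram matrix unchanged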


Other properties
  • Because G = G^\dagger, it is necessarily the case that G and G^\dagger commute. That is, a real or complex Gram matrix G is also a normal matrix.
  • The Gram matrix of any orthonormal basis is the identity matrix. Equivalently, the Gram matrix of the rows or the columns of a real orthogonal matrix is the identity matrix. Likewise, the Gram matrix of the rows or columns of a unitary matrix is the identity matrix.
  • The rank of the Gram matrix of vectors in \mathbb{R}^k or \mathbb{C}^k equals the dimension of the space spanned by these vectors (both facts are checked numerically in the sketch after this list).
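
A brief numerical check of the last two properties, with arbitrary example matrices:

import numpy as np

# Gram matrix of an orthonormal set is the identity
Q, _ = np.linalg.qr(np.random.default_rng(2).standard_normal((4, 4)))
print(np.allclose(Q.T @ Q, np.eye(4)))        # True

# Rank of the Gram matrix = dimension of the span
V = np.column_stack([[1., 2., 0.], [2., 4., 0.], [0., 0., 1.]])
print(np.linalg.matrix_rank(V.T @ V))         # 2: the second column is twice the first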


Gram determinant
The Gram determinant or Gramian is the determinant of the Gram matrix: \bigl|G(v_1, \dots, v_n)\bigr| = \begin{vmatrix}
 \langle v_1,v_1\rangle & \langle v_1,v_2\rangle & \dots & \langle v_1,v_n\rangle \\
 \langle v_2,v_1\rangle & \langle v_2,v_2\rangle & \dots & \langle v_2,v_n\rangle \\
 \vdots & \vdots & \ddots & \vdots \\
 \langle v_n,v_1\rangle & \langle v_n,v_2\rangle & \dots & \langle v_n,v_n\rangle
\end{vmatrix}.

If v_1, \dots, v_n are vectors in \mathbb{R}^m then it is the square of the n-dimensional volume of the parallelotope formed by the vectors. In particular, the vectors are linearly independent if and only if the parallelotope has nonzero n-dimensional volume, if and only if the Gram determinant is nonzero, if and only if the Gram matrix is nonsingular. When n > m the determinant and volume are zero. When n = m, this reduces to the standard theorem that the absolute value of the determinant of n n-dimensional vectors is the n-dimensional volume. The volume of the simplex formed by the vectors and the origin is \sqrt{\bigl|G(v_1, \dots, v_n)\bigr|}\,/\,n!.
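
A numerical sketch of the volume interpretation, with arbitrary example vectors (the simplex here has vertices at the origin and the two given vectors):

import numpy as np
from math import factorial

V = np.column_stack([[1., 0., 0.], [1., 2., 0.]])   # two vectors in R^3
gram_det = np.linalg.det(V.T @ V)
print(np.sqrt(gram_det))                   # 2.0: area of the parallelogram
print(np.sqrt(gram_det) / factorial(2))    # 1.0: area of the triangle (simplex)

# When n = m, sqrt(det G) equals |det V|
W = np.array([[1., 2.], [3., 4.]])
print(abs(np.linalg.det(W)), np.sqrt(np.linalg.det(W.T @ W)))   # 2.0 2.0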

When v_1, \dots, v_n are linearly independent, the distance between a point x and the linear span of v_1, \dots, v_n is \sqrt{\dfrac{\bigl|G(v_1, \dots, v_n, x)\bigr|}{\bigl|G(v_1, \dots, v_n)\bigr|}}.
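
A sketch checking the distance formula against elementary geometry; in this hypothetical example the span of v_1, v_2 is the xy-plane, so the distance from x is its z-coordinate:

import numpy as np

def gram_det(*vecs):                  # Gram determinant of the given vectors
    V = np.column_stack(vecs)
    return np.linalg.det(V.T @ V)

v1, v2 = np.array([1., 0., 0.]), np.array([0., 1., 0.])
x = np.array([3., 4., 5.])

dist = np.sqrt(gram_det(v1, v2, x) / gram_det(v1, v2))
print(dist)                           # 5.0: distance from x to the xy-plane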

Consider the moment problem: given c_1, \dots, c_n \in \mathbb{C}, find a vector v such that \left\langle v, v_i\right\rangle = c_i for all 1 \leqslant i \leqslant n. There exists a unique solution with minimal norm: v = -\frac{1}{\bigl|G(v_1, v_2, \ldots, v_n)\bigr|} \det \begin{bmatrix} 0 & c_1 & c_2 & \cdots & c_n \\ v_1 & \left\langle v_1, v_1\right\rangle & \left\langle v_1, v_2\right\rangle & \cdots & \left\langle v_1, v_n\right\rangle \\ v_2 & \left\langle v_2, v_1\right\rangle & \left\langle v_2, v_2\right\rangle & \cdots & \left\langle v_2, v_n\right\rangle \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ v_n & \left\langle v_n, v_1\right\rangle & \left\langle v_n, v_2\right\rangle & \cdots & \left\langle v_n, v_n\right\rangle \end{bmatrix}.
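
In the real case this reduces to solving a linear system with the Gram matrix: since the minimal-norm solution lies in the span of the v_i, writing v = \sum_j a_j v_j turns the constraints into G a = c, and the determinant formula above can be read as a Cramer's-rule expansion of that system. A sketch with hypothetical data:

import numpy as np

v1, v2 = np.array([1., 1., 0.]), np.array([0., 1., 2.])
V = np.column_stack([v1, v2])
c = np.array([1., 2.])            # target inner products <v, v_i> = c_i

G = V.T @ V
a = np.linalg.solve(G, c)         # coefficients of v in span{v_1, v_2}
v = V @ a                         # minimal-norm solution
print(V.T @ v)                    # [1. 2.]: constraints recovered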

The Gram determinant can also be expressed in terms of the exterior product of vectors by

\bigl|G(v_1, \dots, v_n)\bigr| = \| v_1 \wedge \cdots \wedge v_n\|^2.

The Gram determinant therefore supplies an inner product for the space \textstyle\bigwedge^n(\mathbb{R}^m) of n-vectors (the n-th exterior power). If an orthonormal basis e_i, i = 1, \dots, m, on \mathbb{R}^m is given, the vectors

e_{i_1} \wedge \cdots \wedge e_{i_n},\quad i_1 < \cdots < i_n,
will constitute an orthonormal basis of n-dimensional volumes on the space \textstyle\bigwedge^n(\mathbb{R}^m). Then the Gram determinant \bigl|G(v_1, \dots, v_n)\bigr| amounts to an n-dimensional Pythagorean theorem for the volume of the parallelotope formed by the vectors, v_1 \wedge \cdots \wedge v_n, in terms of its projections onto the basis volumes e_{i_1} \wedge \cdots \wedge e_{i_n}.
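
This Pythagorean identity is the Cauchy-Binet formula in disguise: the Gram determinant equals the sum of the squared n \times n minors of the matrix of column vectors, and each minor is the projection onto one basis volume. A numerical check with arbitrary example vectors:

import numpy as np
from itertools import combinations

V = np.array([[1., 2.],
              [0., 1.],
              [3., 1.]])          # two vectors in R^3, as columns (n = 2, m = 3)

gram_det = np.linalg.det(V.T @ V)

# Sum of squared 2x2 minors = squared projections onto the basis 2-volumes e_i ^ e_j
minors = [np.linalg.det(V[list(rows), :]) for rows in combinations(range(3), 2)]
print(np.isclose(gram_det, sum(m**2 for m in minors)))   # True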

When the vectors v_1, \ldots, v_n \in \mathbb{R}^m are defined from the positions of points p_1, \ldots, p_n relative to some reference point p_{n+1},

(v_1, v_2, \ldots, v_n) = (p_1 - p_{n+1}, p_2 - p_{n+1}, \ldots, p_n - p_{n+1})\,,
then the Gram determinant can be written as the difference of two Gram determinants,
\bigl|G(v_1, \dots, v_n)\bigr| = \bigl|G((p_1, 1), \dots, (p_{n+1}, 1))\bigr| - \bigl|G(p_1, \dots, p_{n+1})\bigr|\,, where each (p_j, 1) is the corresponding point p_j supplemented with the coordinate value of 1 for an (m+1)-st dimension. Note that in the common case that n = m, the n + 1 points p_j are necessarily linearly dependent, so the second term on the right-hand side is zero.
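
A numerical check of this identity with three hypothetical points in the plane (n = m = 2, so the second Gram determinant vanishes):

import numpy as np

def gram_det(P):                  # Gram determinant of the columns of P
    return np.linalg.det(P.T @ P)

p = [np.array([1., 0.]), np.array([0., 2.]), np.array([1., 1.])]   # n + 1 = 3 points in R^2
V = np.column_stack([p[0] - p[2], p[1] - p[2]])                    # v_j = p_j - p_{n+1}

P1 = np.column_stack([np.append(q, 1.0) for q in p])               # points with appended 1
P0 = np.column_stack(p)

print(np.isclose(gram_det(V), gram_det(P1) - gram_det(P0)))        # True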


Constructing an orthonormal basis
Given a set of linearly independent vectors \{v_i\} with Gram matrix G defined by G_{ij}:= \langle v_i,v_j\rangle, one can construct an orthonormal basis
u_i := \sum_j \bigl(G^{-1/2}\bigr)_{ji} v_j.
In matrix notation, U = V G^{-1/2} , where U has orthonormal basis vectors \{u_i\} and the matrix V is composed of the given column vectors \{v_i\}.

The matrix G^{-1/2} is guaranteed to exist. Indeed, G is Hermitian, and so can be decomposed as G = W D W^\dagger with W a unitary matrix and D a real diagonal matrix. Additionally, the v_i are linearly independent if and only if G is positive definite, which implies that the diagonal entries of D are positive. G^{-1/2} is therefore uniquely defined by G^{-1/2} := W D^{-1/2} W^\dagger. One can check that these new vectors are orthonormal:

\begin{align}
\langle u_i, u_j \rangle &= \sum_{i'} \sum_{j'} \Bigl\langle \bigl(G^{-1/2}\bigr)_{i'i} v_{i'}, \bigl(G^{-1/2}\bigr)_{j'j} v_{j'} \Bigr\rangle \\
&= \sum_{i'} \sum_{j'} \bigl(G^{-1/2}\bigr)_{ii'} G_{i'j'} \bigl(G^{-1/2}\bigr)_{j'j} \\
&= \bigl(G^{-1/2} G G^{-1/2}\bigr)_{ij} = \delta_{ij}
\end{align}

where we used \bigl(G^{-1/2}\bigr)^\dagger = G^{-1/2}.
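
This construction is the symmetric (Löwdin) orthogonalization. A minimal numpy sketch with random vectors (an arbitrary example), building G^{-1/2} exactly as described:

import numpy as np

rng = np.random.default_rng(3)
V = rng.standard_normal((5, 3))           # three linearly independent vectors in R^5
G = V.T @ V

w, W = np.linalg.eigh(G)                  # G = W diag(w) W^T, w > 0 for independent v_i
G_inv_sqrt = W @ np.diag(w ** -0.5) @ W.T

U = V @ G_inv_sqrt                        # columns u_i = sum_j (G^{-1/2})_{ji} v_j
print(np.allclose(U.T @ U, np.eye(3)))    # True: the new vectors are orthonormal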


See also
  • Controllability Gramian
  • Observability Gramian

